501 research outputs found

    CLNeRF: Continual Learning Meets NeRF

    Novel view synthesis aims to render unseen views given a set of calibrated images. In practical applications, the coverage, appearance, or geometry of the scene may change over time, with new images continuously being captured. Efficiently incorporating such continuous change is an open challenge. Standard NeRF benchmarks only involve scene coverage expansion. To study other practical scene changes, we propose a new dataset, World Across Time (WAT), consisting of scenes that change in appearance and geometry over time. We also propose a simple yet effective method, CLNeRF, which introduces continual learning (CL) to Neural Radiance Fields (NeRFs). CLNeRF combines generative replay and the Instant Neural Graphics Primitives (NGP) architecture to effectively prevent catastrophic forgetting and to efficiently update the model when new data arrives. We also add trainable appearance and geometry embeddings to NGP, allowing a single compact model to handle complex scene changes. Without the need to store historical images, CLNeRF trained sequentially over multiple scans of a changing scene performs on par with the upper-bound model trained on all scans at once. Compared to other CL baselines, CLNeRF performs much better across standard benchmarks and WAT. The source code and the WAT dataset are available at https://github.com/IntelLabs/CLNeRF. A video presentation is available at https://youtu.be/nLRt6OoDGq0?si=8yD6k-8MMBJInQPs. Comment: Accepted to ICCV 2023
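    As a rough illustration of the architectural idea in this abstract (small trainable per-scan appearance and geometry embeddings attached to an NGP-style radiance field, so that one compact model covers a scene at several points in time), a minimal PyTorch sketch follows. It is not the authors' implementation: the hash-grid encoder is replaced by a stand-in linear layer, and the class names and dimensions (HashGridStub, SceneChangeNGP, app_dim, geo_dim) are assumptions.

    # Minimal sketch (not the authors' code) of per-scan appearance/geometry
    # embeddings attached to an NGP-style radiance field.
    import torch
    import torch.nn as nn

    class HashGridStub(nn.Module):
        """Placeholder for the Instant-NGP multiresolution hash encoding."""
        def __init__(self, in_dim=3, feat_dim=32):
            super().__init__()
            self.proj = nn.Linear(in_dim, feat_dim)  # stand-in for the real encoder

        def forward(self, x):
            return torch.relu(self.proj(x))

    class SceneChangeNGP(nn.Module):
        def __init__(self, num_scans, app_dim=16, geo_dim=16, feat_dim=32):
            super().__init__()
            self.encoder = HashGridStub(feat_dim=feat_dim)
            # One trainable embedding per time/scan index, as described above.
            self.appearance = nn.Embedding(num_scans, app_dim)
            self.geometry = nn.Embedding(num_scans, geo_dim)
            self.sigma_head = nn.Sequential(
                nn.Linear(feat_dim + geo_dim, 64), nn.ReLU(), nn.Linear(64, 1))
            self.rgb_head = nn.Sequential(
                nn.Linear(feat_dim + app_dim + 3, 64), nn.ReLU(), nn.Linear(64, 3))

        def forward(self, xyz, view_dir, scan_id):
            feat = self.encoder(xyz)
            geo = self.geometry(scan_id)    # per-scan geometry code
            app = self.appearance(scan_id)  # per-scan appearance code
            sigma = self.sigma_head(torch.cat([feat, geo], dim=-1))
            rgb = torch.sigmoid(self.rgb_head(torch.cat([feat, app, view_dir], dim=-1)))
            return rgb, sigma

    model = SceneChangeNGP(num_scans=5)
    rgb, sigma = model(torch.rand(1024, 3), torch.rand(1024, 3),
                       torch.zeros(1024, dtype=torch.long))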

    Consensus Maximization: Theoretical Analysis and New Algorithms

    The core of many computer vision systems is model fitting, which estimates a particular mathematical model given a set of input data. Due to imperfections in the sensors, pre-processing steps and/or model assumptions, computer vision data usually contains outliers: abnormally distributed data points that can heavily reduce the accuracy of conventional model fitting methods. Robust fitting aims to make model fitting insensitive to outliers. Consensus maximization is one of the most popular paradigms for robust fitting and is the main research subject of this thesis. Mathematically, consensus maximization is an optimization problem. To understand the theoretical hardness of this problem, a thorough analysis of its computational complexity is first conducted. Motivated by the theoretical analysis, novel techniques that improve different types of algorithms are then introduced. On one hand, an efficient and deterministic optimization approach is proposed. Unlike previous deterministic approaches, the proposed one does not rely on a relaxation of the original optimization problem, which makes it much more effective at refining an initial solution. On the other hand, several techniques are proposed to significantly accelerate consensus maximization tree search. Tree search is one of the most efficient global optimization approaches for consensus maximization, so the proposed techniques greatly improve the practicality of globally optimal consensus maximization algorithms. Finally, a consensus-maximization-based method is proposed to register terrestrial LiDAR point clouds. It demonstrates how to surpass the general theoretical hardness by exploiting special problem structure (the rotation axis returned by the sensors), which simplifies the problem and leads to application-oriented algorithms that are both efficient and globally optimal.
    Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 202
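    For readers unfamiliar with the objective analysed in the thesis, consensus maximization asks for the model parameters whose inlier set (points with residual below a threshold) is as large as possible. The toy Python sketch below illustrates that objective for 2D line fitting with a simple randomized hypothesize-and-verify loop; it is not one of the deterministic or tree-search algorithms developed in the thesis, and the threshold and trial count are arbitrary assumptions.

    # Toy consensus maximization for 2D line fitting: pick the candidate line
    # with the largest inlier set. Not the thesis's algorithms.
    import numpy as np

    def consensus_size(points, a, b, threshold):
        """Number of points within `threshold` of the line y = a*x + b."""
        residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
        return int(np.sum(residuals <= threshold))

    def maximize_consensus(points, threshold=0.1, trials=500, seed=0):
        rng = np.random.default_rng(seed)
        best_model, best_score = None, -1
        for _ in range(trials):
            i, j = rng.choice(len(points), size=2, replace=False)
            (x1, y1), (x2, y2) = points[i], points[j]
            if np.isclose(x1, x2):
                continue  # skip degenerate (near-vertical) two-point samples
            a = (y2 - y1) / (x2 - x1)
            b = y1 - a * x1
            score = consensus_size(points, a, b, threshold)
            if score > best_score:
                best_model, best_score = (a, b), score
        return best_model, best_score

    # 80 inliers on y = 2x + 1 plus 20 gross outliers
    rng = np.random.default_rng(1)
    x = rng.uniform(0, 1, 100)
    y = 2 * x + 1 + rng.normal(0, 0.02, 100)
    y[:20] = rng.uniform(-5, 5, 20)
    print(maximize_consensus(np.stack([x, y], axis=1)))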

    A stable gene selection in microarray data analysis

    BACKGROUND: Microarray data analysis is notorious for involving a huge number of genes and a comparatively small number of samples. Gene selection aims to detect the most significantly differentially expressed genes under different conditions, and it has been a central research focus. In general, a better gene selection method can significantly improve classification performance. One of the difficulties in gene selection is that the numbers of samples under different conditions vary considerably. RESULTS: Two novel gene selection methods are proposed in this paper; they are not affected by unbalanced sample class sizes and do not assume any explicit statistical model of the gene expression values. They were evaluated on eight publicly available microarray datasets, using leave-one-out cross-validation and 5-fold cross-validation. Performance is measured by the classification accuracy obtained with the top-ranked genes selected from the training data. CONCLUSION: The experimental results showed that the proposed gene selection methods are efficient, effective, and robust in identifying differentially expressed genes. With existing SVM-based and KNN-based classifiers, the genes selected by our proposed methods generally give more accurate classification results, particularly when the sample class sizes in the training dataset are unbalanced.
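    The evaluation protocol described in this abstract (rank genes on the training split only, keep the top-ranked ones, then classify held-out samples with a KNN or SVM classifier under cross-validation) can be outlined roughly as below. The Wilcoxon rank-sum score is used purely as a placeholder ranking, since the abstract does not spell out the proposed selection methods; the toy data and the top_k and n_neighbors values are assumptions.

    # Sketch of the cross-validated "select genes, then classify" protocol.
    # The ranking statistic is a placeholder, not the paper's methods.
    import numpy as np
    from scipy.stats import ranksums
    from sklearn.model_selection import StratifiedKFold
    from sklearn.neighbors import KNeighborsClassifier

    def rank_genes(X, y):
        """Order genes by the absolute rank-sum statistic between the two classes."""
        scores = np.array([abs(ranksums(X[y == 0, g], X[y == 1, g]).statistic)
                           for g in range(X.shape[1])])
        return np.argsort(scores)[::-1]  # most discriminative genes first

    def cross_validate(X, y, top_k=50, n_neighbors=3):
        accs = []
        cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
        for train, test in cv.split(X, y):
            genes = rank_genes(X[train], y[train])[:top_k]  # select on training data only
            clf = KNeighborsClassifier(n_neighbors=n_neighbors)
            clf.fit(X[train][:, genes], y[train])
            accs.append(clf.score(X[test][:, genes], y[test]))
        return float(np.mean(accs))

    # Toy data: 60 samples with unbalanced classes (45 vs 15), 500 "genes";
    # the first 10 genes are made differentially expressed.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 500))
    y = np.array([0] * 45 + [1] * 15)
    X[y == 1, :10] += 1.5
    print(cross_validate(X, y))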